U.S. Withholds Support From Major International AI Safety Report

TIME - Tech

Booth is a reporter at TIME. Yoshua Bengio testifies during a hearing before the Privacy, Technology, and the Law Subcommittee of the Senate Judiciary Committee on July 25, 2023. Artificial intelligence is improving faster than many experts anticipated, and the evidence for several risks has "grown substantially."


International Agreements on AI Safety: Review and Recommendations for a Conditional AI Safety Treaty

Scholefield, Rebecca, Martin, Samuel, Barten, Otto

arXiv.org Artificial Intelligence

The malicious use or malfunction of advanced general-purpose AI (GPAI) poses risks that, according to leading experts, could lead to the 'marginalisation or extinction of humanity.' To address these risks, there is a growing number of proposals for international agreements on AI safety. In this paper, we review recent (2023 onwards) proposals, identifying areas of consensus and disagreement, and drawing on related literature to assess their feasibility. We focus our discussion on risk thresholds, regulations, types of international agreement and five related processes: building scientific consensus, standardisation, auditing, verification and incentivisation. Based on this review, we propose a treaty establishing a compute threshold above which development requires rigorous oversight. This treaty would mandate complementary audits of models, information security and governance practices, overseen by an international network of AI Safety Institutes (AISIs) with authority to pause development if risks are unacceptable. Our approach combines immediately implementable measures with a flexible structure that can adapt to ongoing research.


AI Safety is Stuck in Technical Terms -- A System Safety Response to the International AI Safety Report

Dobbe, Roel

arXiv.org Artificial Intelligence

Safety has become the central value around which dominant AI governance efforts are being shaped. Recently, this culminated in the publication of the International AI Safety Report, written by 96 experts, 30 of whom were nominated by the Organisation for Economic Co-operation and Development (OECD), the European Union (EU), and the United Nations (UN). The report focuses on the safety risks of general-purpose AI and available technical mitigation approaches. In this response, informed by a system safety perspective, I reflect on the key conclusions of the report, identifying fundamental issues in the currently dominant technical framing of AI safety and how this frustrates meaningful discourse and policy efforts to address safety comprehensively. The system safety discipline has dealt with the safety risks of software-based systems for many decades, and understands safety risks in AI systems as sociotechnical, requiring consideration of technical and non-technical factors and their interactions. The International AI Safety Report does identify the need for system safety approaches. Lessons, concepts and methods from system safety indeed provide an important blueprint for overcoming current shortcomings in technical approaches by integrating, rather than adding on, non-technical factors and interventions. I conclude with why building a system safety discipline can help us overcome limitations in the European AI Act, as well as how the discipline can help shape sustainable investments into Public Interest AI.

What International AI Safety report says on jobs, climate, cyberwar and more

The Guardian > Energy

In a section on "labour market risks", the report warns that the impact on jobs will "likely be profound", particularly if AI agents – tools that can carry out tasks without human intervention – become highly capable. "General-purpose AI, especially if it continues to advance rapidly, has the potential to automate a very wide range of tasks, which could have a significant effect on the labour market. This means that many people could lose their current jobs," says the report. The report adds that many economists believe job losses could be offset by the creation of new jobs or demand from sectors not touched by automation. According to the International Monetary Fund, about 60% of jobs in advanced economies such as the US and UK are exposed to AI and half of these jobs may be negatively affected.